Capitalisn't

Can Democracy Coexist With Big Tech? with Marietje Schaake

Episode Notes

International technology policy expert, Stanford University academic, and former European parliamentarian Marietje Schaake writes in her new book that a “Tech Coup” is happening in democratic societies and fast approaching the point of no return. Both Big Tech and smaller companies are participating in it, through the provision of spyware, microchips, facial recognition, and other technologies that erode privacy, speech, and other human rights. These technologies shift power to the tech companies at the expense of the public and democratic institutions, Schaake writes.

Schaake joins Bethany and Luigi to discuss proposals for reversing this shift of power and maintaining the balance between innovation and regulation in the digital age. If a “tech coup” is really underway, how did we get here? And how can we safeguard democracy and individual rights in an era of algorithmic governance and surveillance capitalism?

Marietje Schaake’s new book, “The Tech Coup: How to Save Democracy from Silicon Valley,” is available here. Read an excerpt from the book on ProMarket here.

Episode Transcription

Marietje Schaake: Is innovation the highest goal? In other words, if innovation suffers but democracy wins, is that so bad?

Bethany: I’m Bethany McLean.

Phil Donahue: Did you ever have a moment of doubt about capitalism and whether greed’s a good idea?

Luigi: And I’m Luigi Zingales.

Bernie Sanders: We have socialism for the very rich, rugged individualism for the poor.

Bethany: And this is Capitalisn’t, a podcast about what is working in capitalism.

Milton Friedman: First of all, tell me, is there some society you know that doesn’t run on greed?

Luigi: And, most importantly, what isn’t.

Warren Buffett: We ought to do better by the people that get left behind. I don’t think we should kill the capitalist system in the process.

Bethany: I think we all know, intuitively, that Big Tech is not benign. From the Facebook and Cambridge Analytica scandal to the growing evidence, per our episode with Jonathan Haidt, that tech companies are deliberately and knowingly hurting our children so they can increase their profits, everyone can cite an example of where Big Tech has gone wrong. And that’s even before they really unleash AI on us.

Luigi: In a new book, The Tech Coup: How to Save Democracy from Silicon Valley, Dutch politician turned international policy director and fellow at several of Stanford’s institutions, Marietje Schaake, tells the story of how Silicon Valley has cultivated a hands-off approach to regulation, relying on a combination of idealism, which was perhaps at one point genuine; libertarian beliefs, which always ignore the role that government money plays in fostering the growth of new technologies; and ignorance on the part of politicians.

Now, she argues that we may be fast approaching a point of no return. She writes: “The gradual erosion of democracy in our time is being accelerated by the growing and unaccountable power of technology companies. New technologies like AI and cryptocurrencies are emerging in a regulatory vacuum and, as such, could be fatal to democracies.”

Bethany: Part of Schaake’s argument is that both technology companies and governments play a game of wink-wink, nod-nod, in which, when something like the Microsoft and CrowdStrike outage shuts down airlines, hospitals, and more, it’s somehow the fault of neither one.

She writes: “We are facing an inescapable and terrifying reality. The digitization of everything has enabled the weaponization of everything. Companies build digital infrastructure, scan for risks on it, and offer services to protect it. Yet when probed about breaches or things that go wrong, they assert that governments are responsible for ensuring national security. Everyone seems to look to others to own the security question.”

Luigi: She writes: “American presidents and public officials from both parties chose to play a shockingly weak role in regulating technology companies. Ultimately, from Clinton to Biden, the legacy of the 30 years of American technology policy is one of deferential treatment and abdication of responsibility.

“The story, too, is a mixture of idealism and cynicism, the wide-eyed belief about the good and the glory of technology, but also the ways in which technology could serve political power, like by helping governments spy on their citizens and, perhaps, in the current incarnation, the total corruption of any process due to Silicon Valley lobbying power and the revolving door between companies and government agencies.”

Bethany: Schaake paints a pretty dismal picture, but she does try to offer solutions, or at least a possible path forward for how we can update our laws and adapt our regulations to match the growing power of technology companies. She writes, “This is not a book against technology but, rather, in favor of democracy.”

She argues that in many cases, such as the auto industry, regulation has fostered innovation rather than preventing it. So, Big Tech’s well-known lobbying cry that regulation will destroy everything is just an attempt to evade responsibility. She said this at Stanford: “We’re seeing governance by tech companies. The question is, with what oversight and legitimacy?”

Luigi: But we both had some questions about the book as well. The country that has done the most to regulate the internet is China. Is China then the model?

Should all of what Big Tech does be lumped into the same group? Is it as clear as she posits that regulation doesn’t stymie innovation? And why, precisely, is Big Tech so dangerous to democracy?

Here to discuss the evolution of her own thinking in her new book is Marietje Schaake herself.

Bethany: You obviously spent 10 years as a politician in Europe, and you write in your book, “The entire democratic world has been slow to build a democratic governance model for technologies.”

Why do you think Europe has approached technology differently than the US, and is their approach a roadmap? As a follow-up to that question, you write about how the Chinese Communist Party has approached technology. Is that a roadmap, too, or does that approach come fraught with problems as well?

Marietje Schaake: I served in the European Parliament with people who had lived in the Soviet Union, who had their own experiences with states using technology to keep people under control. So, we’re not talking about people who need to imagine what might happen in some dystopian scenario with technology.

That is, I think, at the core of why Europeans are prone to make sure that technology is also bound by the rule of law, which I think is very sensible and is a good model. But it has not been rolled out to perfection yet. There are still a lot of flaws, and there are still too many dependencies on tech companies in the EU as well, despite its role as a so-called super-regulator, and I think those dependencies need to be corrected.

The second part of your question about China . . . To me, the way the Communist Party is using technology as an instrument to keep its control over society is not at all a model. What I do think we can learn from the Chinese is that states remain very powerful if they wish to exert that power. And my criticism that you’ll read in the book is that democratic governments—first and foremost, the United States—have refrained from taking on that role and have not put sufficient checks on the unbridled market that has allowed technology companies to amass incredible amounts of power. That goes for Big Tech, which we’ve heard a lot about and whose excesses I think we can read about in the news every day, but it may also be smaller companies.

They’re producing highly invasive, dangerous, antidemocratic technologies like spyware, for example, that I also think should have been controlled and regulated much more stringently and much faster with the aim of protecting democracy. And I think, too often, the few measures that have been taken in the US have been economic measures like antitrust.

But I don’t think we can rely on antitrust rules to directly target the harms to democracy sufficiently. It’s almost like we hope that there’ll be a side effect for better protection of democracy coming from these economic tools. And I think more direct addressing of the harms to democracy is necessary.

Bethany: Obviously, your central and compelling thesis is that technology writ large poses a threat to democracy. Why? Is it that these companies are so powerful, and they are nonstate actors, and they don’t have any state loyalty? Is it that we don’t understand all of what they’re doing? What, in the end, makes Big Tech or just all of technology more dangerous than, say, big finance or any other big industry that sits outside of a national border these days?

Marietje Schaake: Let’s think about infrastructure. Everything from the data, to assessing whether there’s risk to the infrastructure, to protecting the infrastructure and protecting the data—all of that is given to companies. It’s a stack of different functions, different technologies, and different companies that are all in charge of crucial parts of our lives.

It touches on national security, for example, when we think about undersea cables or data centers or microchips, and it touches on our privacy and our civil liberties when it comes to the question of who has access to this data and what can be done with it.

Look at the United States now, where data from visits to abortion clinics has been used to prosecute women. We see with artificial intelligence, as well, that data that has been gathered in one context—or that hasn’t even explicitly been gathered but just put on the internet—can be scraped and used to train facial recognition.

And these are de facto decisions that companies make about our privacy, and the combination of the power that these companies have through the data they possess, the compute power that they possess, the capital that they possess, the talent that they can hire, the lobbyists that they can hire, creates an accelerator that can keep them ahead of the curve. This is the case for Big Tech.

For small tech, sometimes it’s the nature of the product that they produce, or their being part of the ecosystem that I sketched, that gives the technology a specific character, one that requires checks and balances that are often missing these days.

Luigi: I would like to distinguish a bit between the private sector and the public sector. I do agree with you that we fear concentration of power, but concentration of power is different in the two situations. For example, in the diffusion of Pegasus . . . For our listeners who are not familiar, this is software that allows people to spy on what you’re doing on your phone. This is done by the public sector, which has the power to incarcerate you and do all sorts of terrible things that the private sector does not have.

And, by the way, Pegasus is not an example of Big Tech. It is an example of small tech, but it is very dangerous. And so, I’m 100 percent with you that we want to regulate the sale of this, especially to dictatorial regimes.

You seem to suggest that the solution to deal with Big Tech is to have the government regulate that. But at some level, and the Chinese example shows it, the moment that governments intervene is when they become even more powerful. If you give the government control over Big Tech, you give the government control over a bunch of data that the government, especially if it’s not benign, can use in all sorts of terrible ways.

And even if the government is allegedly democratic, we know that in democratic governments, we have a lot of people behaving against the rules and abusing their power. And so, if you are concerned that power corrupts, and absolute power corrupts absolutely, putting all your eggs in the government basket can be very dangerous.

Marietje Schaake: Well, of course, that’s not the hope that I have for solutions, that governments become more powerful to abuse technology to repress people. The idea is to make sure that technologies are ultimately accountable—the companies that operate them, but also the governments that use the commercial tech, like the spyware that you used in your example. Actually, if you look at data-protection rules, they are also there to protect people from abuse of power or negligence by governments.

Regulation does not necessarily empower a government more. In fact, regulations can put checks on the role of governments, as they put checks on the role of companies. The course correction that I think we need is something completely different from swinging the pendulum all the way to the other side and saying, “Oh, just give all the power to any government and trust that the outcome will be good.”

Regulation is a process that can lead to better solutions. And there may well be bad regulation, there may well be good regulation, but what I’m saying is we need democratic leaders and governments to step up, to reclaim their role, before they lose insight into how these technologies work, before they lose agency and, by proxy, citizens lose agency to decide how they would like this technology to play a role in their lives and in their societies.

Luigi: Maybe this is an economist’s bias or a US bias, but when I think of regulation, I like to distinguish between regulation and liability, which I don’t see as regulation. I see it simply as the rule of law: defining property rights. And a lot, in my view, can be accomplished in a less intrusive way through a more aggressive definition of property rights.

One of the big issues, for example, when you mentioned fake news, et cetera, is the famous Section 230 of the Communications Decency Act of 1996, which exempted the big platforms from liability. As we saw with the case of Fox News and Dominion, if you are not a platform, and you lie and you spread fake news, you are liable. You are liable for an enormous amount of money, and that liability actually works.

I would like to discuss more why you don’t make that a centerpiece of your book. You do talk about Section 230, but it’s not at the center of your proposal. Why?

Marietje Schaake: It’s a good question. I think the discussion about Section 230, and a lot of the analysis and discussion about disinformation/freedom of expression in the US, have gotten very much stuck. That is partly because of the power and the interpretation of the First Amendment, but it’s also because looking at speech and liability is only one of many possible directions.

And what I’m trying to do is say: look also at nondiscrimination, and look also at the way in which governments procure technology and create dependencies that they could leverage toward values that matter. But when I look at the disinformation/freedom of expression context, especially in the United States, which is critical because some of these platforms are located there, I feel like it’s an avenue that hasn’t been very fruitful, unfortunately.

Bethany: One of the things you do in the book that’s really compelling is chart the hands-off process of US politicians and presidents toward technology. That reversed in some ways with the Biden administration, with the appointments of Tim Wu and Lina Khan, and yet, on the outside, that doesn’t appear to have changed much. So, what tale does that tell?

Marietje Schaake: Lina Khan, Tim Wu, Jonathan Kanter, they’re really trying to do what they can to course correct in a context that is, politically, incredibly difficult. We see President Biden writing op-eds about what he would like to see change through legislation because Congress cannot come together and form majorities to do it.

When I listen to both believers in regulation and those who despise it, no one has high expectations of what Washington might do to change the balance of power between tech companies and democratic authorities. I mean, some people think that’s great, some people think it’s a disaster, but I don’t know anyone who’s like, “Oh, let’s go to Washington and create change there.” That is the context in which they have to work. I think what they have to wrestle with is more a product of the previous lack of governance of this sector than of any lack of their own capabilities or vision, because I think they have plenty.

Luigi: But actually, going back to freedom of speech, that’s exactly why I am concerned about moving a lot more power into the hands of the government. Even democratic governments don’t necessarily protect minorities. If you allow the government to decide what is acceptable speech and what is not acceptable speech, we might have a situation in which defending a free Palestine could be considered unacceptable speech.

We have seen governments interfering massively in this. That’s the reason why I prefer liability: liability is much more protective of individual rights than a democracy is. Even in a perfect democracy, the majority will overrule the minority. And so, we don’t have protection of minority rights in this system.

Marietje Schaake: Well, at the moment, it’s tech companies that decide what can or cannot be said about any given political situation, whether it’s Gaza or something else. That power already rests somewhere and is used in ways that are very opaque and sometimes hard to know about. Sometimes, we learn anecdotally, but the terms that companies use to regulate their platforms—look at Twitter/X—change all the time.

And so, the question is, where can we find more accountability? I’m happy to explore the path of liability, which you’ve thought about more, but the parameters of when a company or a government or any other actor is liable still need to be shaped through rulemaking, I presume.

It’s not going to be done in any other way. So, it’s about, what goals do we have? Can we make sure that decisions that are made about such crucial freedoms are actually anchored in accountable processes? And if you think the best path is to hold companies to account through liability, that would be a part of the solution.

I think, ultimately, there will be public rules, democratic rules, needed to create that accountability, and they can go either way. The next administration may say, “Oh, we’re going to have even less accountability or liability for the platforms.” I think that would be the wrong direction, but it’s conceivable.

My aim is to ensure that democratic institutions are empowered and take their responsibility in making sure that there are guardrails, and that can be done through regulation. It can also be done through more investment in public infrastructure, public technologies. It can be done through better enforcement, something we don’t talk about enough.

Laws are presented—take the EU as an example—with big press conferences, big expectations. The GDPR, General Data Protection Regulation, is celebrated as a landmark data-protection law. But if you look at the enforcement side, it’s actually deeply disappointing.

Unfortunately, I don’t think there’s a magic wand. I don’t think there’s one single thing that can be done that will solve this problem, also because we’re talking about multifaceted, different types of technology in different parts of our lives. Social media is one very visible one, but others, like the cybersecurity contracts that governments have, or infrastructure that spans the world, are a very different angle to take, and I think one we don’t look at enough.

Bethany: Stepping back again, the rallying cry of Big Tech or technology from the beginning has been, “If you regulate us, innovation will suffer.” And that’s part of a broader theme of American business. You make a pretty convincing case in the book that, in many industries, that’s proven to be untrue, such as the auto industry.

I, on some level, have come to believe it. And maybe that’s just because I’ve grown up in this environment, and it’s sort of seeped into the water that regulation must stymie innovation. How do you think about that when it comes to technology, and especially where you sit now at Stanford, where you must hear all the time that this is going to destroy the technology industry? How do you think about that in a nuanced way?

Marietje Schaake: First of all, as you mentioned, I don’t believe it’s always true. I think many regulations have led to innovation, and I think they can. The question is, what kind of innovation? Is it the innovation that serves the VCs and the Big Tech companies in Silicon Valley? Well, maybe not. Is it the kind of innovation that serves our planet, serves balance between different people in our society, between larger and smaller companies?

What I also touch upon in the book, and I think this is more of a principled question that we have to ask ourselves, is innovation the highest goal? In other words, if innovation suffers but democracy wins, is that so bad? I don’t buy into the notion that innovation is always the most important goal to achieve.

Luigi: I agree with you that this mantra that we need to preserve innovation has been abused by lobbyists. My concern is that the way you describe it, at least the way I read it, is that you have this idea of a perfect democracy dealing with all the problems of an imperfect technology. And so, the perfect will always win against the imperfect. But we know, in reality, that democracy is far from perfect.

To go to a different field, stem-cell research in the United States has been seriously impaired by restrictions that have been put in place democratically by democratically elected Republican presidents. I don’t think, if we do an analysis of society, that society is necessarily better off as a result of those restrictions. So, what guarantees do we have that the democratic intervention will actually make us better off?

Marietje Schaake: We don’t. We never have guarantees that regulation will turn out the right way. What I do think we can say is that if a rule is made by a publicly mandated and accountable institution, whether it’s Congress or any government agency that has that mandate, then we know through the democratic process how to reverse it. It means it can be adjusted through democratic means. When big companies make decisions that impact our lives in a certain way, we don’t have those kinds of accountability mechanisms.

We can each, I’m sure, list examples of where regulation has led to better outcomes, from our personal point of view, or to worse outcomes, from our personal point of view. I’m looking at the question at the level of, how can the rule of law be empowered, versus being disempowered and even pushed aside, as it is with the status quo that we have? And so, we need to also critically assess that reality, if you ask me.

Bethany: One of the arguments you make is that new technologies like AI and crypto are essentially the endgame. They might be the thing that actually, fundamentally, does destroy democracy. Why? Why are they worse than what else is out there? Why are they more dangerous than everything that’s come before?

Marietje Schaake: Well, let’s take the case of AI, where there are still a lot of question marks about what it will ultimately result in.

What we see is that the companies who are in the lead when it comes to developing AI are building on a power position that they were able to build without having had many guardrails to keep them in check. It’s the companies that already have a lot of data, that have already hired a lot of talent, that already have a lot of compute, that already have big lobbying budgets, that have now built the next product on top of that capital or assets.

On the other hand, we can talk about the nature of AI, which draws on a lot of data that can be used in many, many contexts and lead to unexpected outcomes, unexpected for the engineers who built the models and unanticipated for society. Huge risks have been taken by putting some of these products onto the market and by continuing to develop them without the knowledge about the models being assessed more critically.

One of the examples that I mentioned that could be applied to AI and other emerging technologies is that of the precautionary principle. It’s an idea that exists in the European context where, if there is an innovation about which there are a lot of unknowns, then there’s merit in doing more research before releasing it into the wild.

I think those are the kinds of pause buttons we need: processes of learning about a technology in the public interest, not in the interest of the shareholders of the company, who are, of course, motivated to move fast, to be faster than their competitors, and, when it comes to AI, to have people engage, because the systems learn from our engagement with them.

The incentives are different, and I think we need to understand who benefits and who takes the risk. I think, in the case of AI, it’s pretty clear. Look at the AI companies that are getting investment after investment and look at the societal question marks that are still very much unanswered.

Luigi: I agree that there are a lot of externalities, but if you take the precautionary principle seriously, think about GMOs. What are the consequences of GMOs 20 or 30 years from now? Think about the vaccines. We don’t know the long-term effects of the new vaccines. Do you want to wait 20 or 30 years to learn the effects?

If you compare, on this ground, Europe and the United States, the United States uses the system of liability in which you introduce a product, and then you’re liable if there is a problem. Europe tends to use the precautionary principle, and the result is that Europe has innovated much less in the last 20 or 30 years and is falling behind. So, I think that that will be the biggest case in favor of the lobbyists saying, “We do want to use liability, not the precautionary principle.”

Marietje Schaake: Well, but the examples that you mentioned . . . About the release of vaccines, for example, the question is, who gets to decide when they get released? And here, I think we need a more public-interest-driven process for deciding that. With a lot of tech companies, the reality is changed de facto, before any kind of research has been done or even can be done. We need processes where the acceptance of risk is more of a public decision than a private decision.

I worry that with tech companies like Clearview AI that have scraped the internet for billions of faces, they put a product out there that can violate all of our privacy. Companies like Facebook, from which the faces have been scraped, and others say, “Well, that was against the rules that we have on our platform,” but of course, those rules are not binding. And so, Clearview was like, “Oh, well, OK, sorry about that, but here’s our product.”

The decisions about whether that’s OK to do are all taken by companies. This whole new reality, a de facto reality about what is and is not acceptable, is created without any check or assessment from the public interest. I’m making the case to do more of those assessments from the public-interest perspective.

Luigi: I feel you said something very important about the fact that we don’t even have the knowledge to think about what the consequences are. If I’m a chemical company and I do something unique, I don’t have monopoly over the chemical elements, and people can try to reproduce what I’m doing. But if I am a social-media company or a large data company, I have a monopoly over the data, and so, nobody can see what I’m doing except myself. Nobody understands the impact except myself.

One of the solutions for this would be to mandate a sharing of the data with the public agencies that allows various researchers to do experiments in a way that they can check what these guys are doing. But this goes a bit against the overarching protection of privacy because you are going to have data from Google go to a government agency, data from Facebook go to a government agency. Where do you see this trade-off? First of all, do you agree that this could be a good direction? And, number two, how do you reconcile this tension?

Marietje Schaake: Well, sometimes companies are really good at using the privacy argument in their own favor, even though there’s no merit. So, we have to be very careful. I mean, it’s a funny kind of argument to say, “We, company A, B, or C, are better capable of handling this data. And if we release it to academics, then the privacy issue emerges.” Well, why isn’t there a privacy issue to begin with, then?

Again, talking about framing, this is really important. Just because a company says there’s a privacy issue doesn’t necessarily mean there is one. Indeed, we need much more independent research to verify the claims of companies. When companies say, “We are able to moderate our large language models so that they are less discriminatory,” OK, well, show us how, and let us verify. We don’t trust a car company that says, “Hey, our airbags are safe.” We want to independently check them.

Luigi: In the book, you cite a speech by the rector of the University of Amsterdam, in which she highlights the risk to academic freedom from technology companies’ outsize power in the academic world.

Can you speak more about this? It seems to me that you left the European Parliament, and you really went into the wolves’ den, because Stanford is probably the place most dominated by Big Tech. Number one, why did you pick Stanford? And, number two, was there any instance in which your freedom of speech at Stanford was somewhat limited or jeopardized by pressure from above because you were hurting the big daddies?

Marietje Schaake: I love those questions. I went to Stanford because I wanted to understand the politics of Silicon Valley. I think it must be understood as a political hub as much as it’s understood as a technology hub, or a VC hub, or an economic-activities hub.

I have not experienced direct challenges to what I had to say because I think people knew what they were getting. I’ve always been very transparent about what I stand for, but I do know that people disagree with me or want to make sure that there is a balance between the views that I represent and other views.

So, for example, there may have been events where there would be another speaker to make sure that it was clear that I represent one view and another speaker represents another. But I think that’s all very healthy and very normal in academic debates. I haven’t felt isolated, but I do know I represent, in many ways, a minority perspective at the university where I work. That’s right.

Bethany: Have your five years there shaped your perspective in any way? Have you gotten what you wanted in terms of understanding the politics of Silicon Valley as well?

Marietje Schaake: It’s been very informative to see the extent to which Silicon Valley is a bubble and essentially is a really small group of people. What has surprised me over the past five years, and I think is a phenomenon we’ll see more of, is that, actually, people who are in national security—not so much the civil-liberties activists who I knew were worried about the outsized power of Big Tech, but increasingly, people in national security—are looking at the incredible dependence on tech companies and wondering: “Wow, have we given away too much agency? Are we capable of performing our core tasks? Is there a national-security risk in overreliance on tech?”

Of course, there are those who say, “Tech is going to save us in the competition vis-à-vis China.” So, there are different voices out there. But I would not be surprised if it was people in national security who are going to speak out decisively for more checks and balances and guardrails around tech companies. That’s something that I didn’t expect going in five years ago but that I see more and more of, and that will create different kinds of coalitions or different kinds of voices that will challenge the status quo from a variety of perspectives. That, I think, is interesting, and I’m curious to see where it will lead. Much will depend on the elections in November, of course.

Luigi: It’s a funny world if democracy is saved by the neocons.

Bethany: In the way I think about this, there’s a third player here. We’re talking about the government in a democratic process, and we’re talking about Big Tech in companies.

But the third player is the market, and that’s an incredibly powerful player, and it makes me feel a little bit hopeless about all of this because the market serves a lot of powerful interests, in terms of the global investment community, and that pressure is for more now and whatever might aid the bottom line now.

How do you, in your thinking about this, account for the power of that third player of the market, which, again, is totally unbound by any kind of democratic process and doesn’t have that as part of its consideration at all?

Marietje Schaake: Do you mean market forces, or do you mean VCs and investors in tech companies?

Bethany: I mean all of it and the way that creates incredible pressure for the bottom line.

Marietje Schaake: Right. Well, I think the whole book is a critique of the fact that this is the bottom line.

Luigi: In your book, you are very honest in saying, “I make a series of proposals, but I don’t discuss feasibility because that will basically jeopardize much of what we hope for.” But I actually tried to think about this in a paper, and I think that the hope for a better future is actually an international alliance between Europe and India.

Why Europe and India? Because together, they represent the largest mass of population and the largest amount of GDP that is not completely captured by Big Tech, either the American or the Chinese Big Tech. And so, they can do what we call a consumer union, and as a consumer union, they can impose restrictions and conditions on the producers.

How feasible do you think this union is? Number one, there are questions about how democratic the Indian regime is, but number two, does Europe really exist as an entity? My fear is that the United States, financed by Big Tech, plays a very strategic role in pitting one country against another, because Europe has so many parts that it is very easy to get two of them on your side, and those two will stop any action. So, is it feasible to have a European or Indo-European solution, or is this just dreamland?

Marietje Schaake: I think it’s a very interesting direction to explore. It sounds to me like your idea is based on consumers, on the size of the markets, and on leveraging that, which, as you touched upon, is not necessarily the same as leveraging democratic values, because, unfortunately, both in India and, I must say, in Europe, too, there are those who have very different agendas.

But if these two could be the motor block of a democratic coalition, where it’s clear that those values are leading, and that they can leverage those values vis-à-vis the outsized power of tech, whether it’s coming from the East or the West, so to say, I think that would be very useful indeed.

And these kinds of steps can also be initiated by smaller countries. The same way that a veto can paralyze the whole EU, a good idea from one or two members can push ahead, and when it comes to the market, the EU is still one. I don’t think I have to tell you that it’s in new regulations or foreign policy, where member states have veto power, that things get challenging. But I do think that when it comes to the single market and even tech policy, the EU is pretty much aligned.

Bethany: I really, really liked her point about innovation and about thinking about it through the lens of who the innovation serves. There is this framing that we all have in life that innovation is good. We think of innovation as an inescapably, completely positive word. The reality is, when you think about a lot of innovation, there’s innovation that serves people, and there’s innovation that serves companies, and they are not always the same thing. Because of her point on the importance of innovation serving the population, I thought of that as a broader view of democracy as well. The two things got linked in my mind. Does that make sense?

Luigi: You are absolutely right that the fundamental question here is: should you manage innovation? Some people say you shouldn’t manage innovation at all. And if you do manage it, how?

The problem is that innovation has a lot of redistributional effects. And, more often than not, if you have a fully democratic system, you can block innovation or delay innovation. Maybe you’re saying, “I don’t care,” because there is something valuable in that. But we need to be clear about who makes those decisions and through what process. It’s a bit tricky to have a few technocrats making that decision because, as we see, those decisions are not made very well. The technocrats will also be subject to an enormous amount of lobbying.

I think it is a very tricky question, to which I don’t think she has very compelling answers, to be honest. She waves the flag of democracy. If you think that democracy is perfect, then fine. I am a little bit more cynical. I fear that democracy is distorted, and then the question is, how do you deal with that? In the process, do you really want to give all that power to stop things to a system that is very imperfect?

Now, of course, there is also the issue, which is very prominent, of international competition. Think about the old China—the ancient China, not the recent China. Ancient China was a system where they managed innovation, and they managed it to the point that it didn’t really progress very fast. Is that the model we want to copy?

On the other hand, I am with you and with her: I don’t want a free-for-all, and there are enormous side effects of innovation. I think that the question she asked is really, really important. I think she has good answers on other fronts, but on that front, her answers were not overwhelming to me.

Bethany: I really do appreciate her framing of things, and I really do like trying to think through the lens that not all innovation is good innovation and that we should pause and think about it first. But I think it is hopeless, in the sense that the idea that innovation is good, and that the risk of too little innovation is far worse than the risk of too much innovation, is just so deeply bred into the fabric of everything that is American.

The technology industry has successfully played into that and adopted that. I think the idea that, as a country, we’re ever going to change our minds and have any skepticism about innovation is . . . I think it’s too big a historical concept for us to ever overcome. I’m not sure we should, either, by the way. I could make the argument that the risk of too much innovation is better than the risk of too little innovation.

I did think the point she made about some of the doubts coming out of people who are in positions of power and national security was really interesting. One of the things that was incredibly compelling to me about her book was this idea that so much of the world’s infrastructure, like undersea cables, was in the hands of private companies that nobody really understood or had accountability for and that our world could be just completely demolished by a glitch in one of these things that has no government accountability or oversight.

I have been thinking about that a lot since the CrowdStrike and Microsoft issue this summer: something just deep in the bowels of a technological update that nobody understood managed to disrupt life globally for people. That whole aspect of things is a really interesting and frightening and important part of this. I think the idea that it might be people in national security who start to wrestle with this in a profound way is really interesting to me.

Luigi: Yeah. I think she is absolutely right there because it’s not just a risk of accidents, like the one we saw with the update. It is the power that you give to some people. The power that Elon Musk had over the Ukrainian war is unprecedented. Another thing that she mentioned in her book—and I should have known, but I didn’t realize it—is that there is still another investor in Twitter, and that other investor is a Saudi prince.

Bethany: I had forgotten about that, too.

Luigi: That’s pretty revealing because, clearly, he did not invest for the returns, or at least if he did, he would be very pissed at this point. I think he invested for something else, and that’s what we need to be careful about.

Bethany: I’m not so sure that’s true. The Middle East had all of these big conferences for global investors during the very time that the whole Israel-Palestine issue was kicking into high gear, and it’s almost as if the two things went on parallel tracks. There was no interaction between the world of money and the world of politics and what that could mean, because the world of money is on its own path to make more money for itself.

That gets to the answer of hers that I didn’t like when I asked her about the power of the market, and she said, “The whole book is a critique of the fact that this is the bottom line.” Well, yes, but how do you change that when you have an actor as powerful as the market and the incentives to make money, which do, on the part of those with money, transcend, for the most part, global politics, governments, anything else? I’m not sure that there’s a way to fix that.

Luigi: I disagree here. I think that at the highest levels, power transcends money. When it comes to the prince of Saudi Arabia, I think he cares more about maintaining control of the country than about making more money. Investing in a losing enterprise, but one that gives a lot of control over its citizens, is very worthwhile.

Bethany: Maybe. I think the two are intimately bound up with each other, and maintaining control of power means making money, and making money means maintaining control of power. I don’t think they’re two separate things at all. The two are bound up with each other, but that’s a little bit of a digression from the conversation.

There is also a tension in her argument, between democratic governments needing to take responsibility and step up, in a sense, and her points about how so many good proposals have come from the small and from the margins, and how unexpected coalitions can form.

There’s an interesting tension there, and it did make me think about our episode with Jonathan Haidt. In the end, what changes things the most might be broad coalitions of parents refusing to put smartphones in the hands of their children and parents refusing to allow children access to social media or states taking action that makes it more difficult for children to have access to social media. That might do more to start changing things than any sort of big government action aimed at social media and speech.

Luigi: But you said it perfectly, Bethany, because that’s exactly the part where she does not have a democratic spirit. The approach of Jonathan Haidt is bottom-up. It is very democratic, with a small “d.”

She doesn’t seem to rely on that. She doesn’t seem to trust it. She trusts more the expert, the agency, the top. That’s where I disagree with her, because I think that an agency can easily be captured, but the angry parents who see the impact of social media on their kids cannot be easily captured.

Bethany: Except I think that her book and our conversation with her reveal a tension in her, too, or two different points of view. In our conversation, she was much more open to the idea that change comes from the bottom up, from unexpected places and unexpected coalitions, and far less top-down than she comes across in the book.

Maybe the greatest gift of her book is just this increased awareness of how we all think about this, because maybe, in the end, it’s true that the change is going to have to come from us—and from people, perhaps, in national security, who understand the ramifications of all of this. It is going to have to be a bottom-up groundswell across all these various, different components of tech power that ends up changing things. And so, a coalition of parents and a coalition of national-security experts might work to change things more effectively than the election of a new president.